Confidence-and-Refinement Adaptation Model for Cross-Domain Semantic Segmentation
Authors
Abstract
With the rapid development of convolutional neural networks (CNNs), significant progress has been achieved in semantic segmentation. Despite this great success, such deep learning approaches require large-scale real-world datasets with pixel-level annotations. However, considering that labeling semantics is extremely laborious, many researchers turn to synthetic data, which can be annotated for free. But due to the clear domain gap, a segmentation model trained on synthetic images tends to perform poorly on real-world datasets. Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention, aiming to alleviate this discrepancy. Existing methods in this scope either simply align features or outputs across the source and target domains, or have to deal with complex image processing and post-processing problems. In this work, we propose a novel multi-level UDA model named the Confidence-and-Refinement Adaptation Model (CRAM), which contains a confidence-aware entropy alignment (CEA) module and a style feature alignment (SFA) module. Through CEA, adaptation is done locally via adversarial learning in the output space, making the model pay attention to high-confident predictions. Furthermore, to enhance the transfer of shallow features, SFA is applied to minimize the appearance gap across domains. Experiments on two challenging benchmarks, "GTA5-to-Cityscapes" and "SYNTHIA-to-Cityscapes", demonstrate the effectiveness of CRAM. We achieve performance comparable to existing state-of-the-art works, with the advantages of simplicity and fast convergence.
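The core signal behind CEA, favoring high-confident predictions during output-space alignment, can be illustrated with a small sketch. The exact formulation is not given in the abstract; the function names `entropy_map` and `confidence_mask` and the threshold `tau` below are hypothetical, and this is only a minimal illustration of per-pixel entropy and confidence masking, not the paper's implementation:

```python
import numpy as np

def entropy_map(probs, eps=1e-12):
    """Normalized Shannon entropy per pixel, in [0, 1].

    probs: array of shape (C, H, W) holding softmax probabilities
    over C classes for each pixel. Low values indicate confident
    (peaked) predictions; high values indicate uncertainty.
    """
    num_classes = probs.shape[0]
    ent = -np.sum(probs * np.log(probs + eps), axis=0)
    return ent / np.log(num_classes)  # divide by max entropy log(C)

def confidence_mask(probs, tau=0.9):
    """Boolean (H, W) mask of pixels whose top-class probability
    exceeds a confidence threshold tau (a hypothetical parameter)."""
    return probs.max(axis=0) > tau
```

A uniform distribution over classes yields normalized entropy 1 (maximally uncertain), while a near-one-hot prediction yields entropy close to 0 and passes the confidence mask, which is the kind of pixel an alignment scheme like CEA would emphasize.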
Similar resources
Unsupervised Domain Adaptation for Semantic Segmentation with GANs
Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches showcase the inability of even deep neural networks to learn informative representations across domain shift. This problem is more severe for tasks where acquiring hand labeled data is extremely hard and tedious. In this work, we focus on adapting the representations learned by segmentation netwo...
Laplacian Reconstruction and Refinement for Semantic Segmentation
CNN architectures have terrific recognition performance but rely on spatial pooling which makes it difficult to adapt them to tasks that require dense, pixel-accurate labeling. This paper makes two contributions: (1) We demonstrate that while the apparent spatial resolution of convolutional feature maps is low, the high-dimensional feature representation contains significant sub-pixel localizat...
A Classification Refinement Strategy for Semantic Segmentation
Based on the observation that semantic segmentation errors are partially predictable, we propose a compact formulation using confusion statistics of the trained classifier to refine (re-estimate) the initial pixel label hypotheses. The proposed strategy is contingent upon computing the classifier confusion probabilities for a given dataset and estimating a relevant prior on the object classes p...
Sample-oriented Domain Adaptation for Image Classification
Image processing is a method to perform operations on an image in order to produce an enhanced image or to extract useful information from it. Conventional image processing algorithms cannot perform well in scenarios where the training images (source domain) used to learn the model have a different distribution from the test images (target domain). Also, many real world applicat...
Unsupervised Language and Acoustic Model Adaptation for Cross Domain Portability
This work investigates the task of porting a broadcast news recognition system to a conversational speech domain, for which only untranscribed acoustic data are available. An iterative adaptation procedure is proposed that alternatively generates automatic speech transcriptions and performs acoustic and language model adaptation. The procedure was applied on a tourist-information conversational...
Journal
Journal title: IEEE Transactions on Intelligent Transportation Systems
Year: 2022
ISSN: 1558-0016, 1524-9050
DOI: https://doi.org/10.1109/tits.2022.3140481